Catastrophic interference

Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to completely and abruptly forget previously learned information upon learning new information.〔McCloskey, M. & Cohen, N. (1989) Catastrophic interference in connectionist networks: The sequential learning problem. In G. H. Bower (ed.) ''The Psychology of Learning and Motivation'', ''24'', 109-164〕〔Ratcliff, R. (1990) Connectionist models of recognition memory: Constraints imposed by learning and forgetting functions. ''Psychological Review'', ''97'', 285-308〕 Neural networks are an important part of the network and connectionist approaches to cognitive science. These networks use computer simulations to model human behaviours, such as memory and learning. Catastrophic interference is therefore an important issue to consider when creating connectionist models of memory. It was originally brought to the attention of the scientific community by research from McCloskey and Cohen (1989)〔 and Ratcliff (1990).〔 It is a radical manifestation of the ‘sensitivity-stability’ dilemma〔Hebb, D. O. (1949). ''The Organization of Behavior''. New York: Wiley〕 or the ‘stability-plasticity’ dilemma.〔Carpenter, G. A., & Grossberg, S. (1987) ART 2: Self-organization of stable category recognition codes for analog input patterns. ''Applied Optics'', ''26'', 4919-4930〕 Specifically, these problems refer to the challenge of making an artificial neural network that is sensitive to, but not disrupted by, new information.
Lookup tables and connectionist networks lie on opposite sides of the stability-plasticity spectrum.〔French, R. M. (1997) Pseudo-recurrent connectionist networks: an approach to the ‘sensitivity-stability’ dilemma. ''Connection Science'', ''9''(4), 353-379〕 The former remain completely stable in the presence of new information but lack the ability to generalize, i.e. infer general principles, from new inputs. Connectionist networks like the standard backpropagation network, on the other hand, are very sensitive to new information and can generalize from new inputs. Backpropagation models can be considered good models of human memory insofar as they mirror the human ability to generalize, but these networks often exhibit less stability than human memory. Notably, these backpropagation networks are susceptible to catastrophic interference. This is considered a problem when attempting to model human memory because, unlike these networks, humans typically do not show catastrophic forgetting. Thus, catastrophic interference must be eliminated from backpropagation models in order to enhance their plausibility as models of human memory.
==Artificial Neural Networks: Standard Backpropagation Networks and Their Training==

In order to understand the topic of catastrophic interference it is important to understand the components of an artificial neural network and, more specifically, the behaviour of a backpropagation network. The following account of neural networks is summarized from ''Rethinking Innateness: A Connectionist Perspective on Development'' by Elman et al. (1996).〔Elman, J., Karmiloff-Smith, A., Bates, E., & Johnson, M. (1996). ''Rethinking Innateness: A Connectionist Perspective on Development''. Cambridge, MA: MIT Press.〕
Artificial neural networks are inspired by biological neural networks. They use mathematical models, namely algorithms, to perform tasks such as classifying data and learning patterns in data. Information is represented in these networks through patterns of activation, known as distributed representations.
The basic components of artificial neural networks are ''nodes/units'' and ''weights''.
''Nodes'' or ''units'' are simple processing elements, which can be considered artificial neurons. These units can act in a variety of ways. They can act like sensory neurons and collect inputs from the environment, they can act like motor neurons and send an output, they can act like interneurons and relay information, or they may perform all three functions. A backpropagation network is often a three-layer neural network that includes input nodes, hidden nodes, and output nodes. The hidden nodes allow the input to be transformed into an internal representation, akin to a mental representation. These internal representations give the backpropagation network its ability to capture abstract relationships between different input patterns.
The nodes are also connected to each other, thus they can send activation to one another like neurons. These connections can be unidirectional, creating a feedforward network, or they can be bidirectional, creating a recurrent network. Each of the connections between the nodes has a ''weight'', or strength, and it is in these weights that the knowledge is ‘stored’. The weights act to multiply the output of a node. They can be excitatory (a positive value) or inhibitory (a negative value). For example, if a node has an output of 1.0 and it is connected to another node with a weight of -0.5, then the second node will receive an input signal of (1.0 x -0.5) = -0.5. Since any one node can receive multiple inputs, all of these inputs must be summed to calculate the net input.
The ''net input'' (''net''i) to a node ''i'' is defined as:
''net''i = Σj ''w''ij''o''j
''w''ij = the weight between node ''i'' and sending node ''j''
''o''j = the activation (output) of sending node ''j''
Once the input has been sent to the hidden layer from the input layer, the hidden nodes may then send an output to the output layer. The output of any given node depends on the activation of that node and the response function of that node. In the case of a three-layer backpropagation network, the response function is a non-linear, logistic function. This function allows a node to behave in an all-or-none fashion towards high or low input values and in a more graded and sensitive fashion towards mid-range input values. It also allows nodes at the more extreme activation values to produce more substantial changes in the network. The net input is transformed into a ''net output'', which can be sent on to the output layer, as follows:

''o''i = 1/(1 + exp(-''net''i))
''o''i = the activation of node ''i''
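As a concrete illustration, here is a minimal sketch of these two steps in Python, assuming NumPy; the weights, activations, and function names are purely illustrative and not taken from the text:
```python
import numpy as np

def net_input(weights, activations):
    """Net input to a node: the weighted sum of incoming activations.

    weights[j] is w_ij, the weight from sending node j to this node i,
    and activations[j] is o_j, the activation of sending node j.
    """
    return np.dot(weights, activations)

def logistic(net):
    """Logistic response function: squashes net input into (0, 1).

    Mid-range net inputs give graded responses, while large positive
    or negative net inputs saturate toward 1 or 0 (all-or-none).
    """
    return 1.0 / (1.0 + np.exp(-net))

# Example: a node receiving from three senders, one inhibitory weight.
w = np.array([0.8, -0.5, 0.3])    # incoming weights (negative = inhibitory)
o = np.array([1.0, 1.0, 0.5])     # activations of the sending nodes
print(logistic(net_input(w, o)))  # node output, roughly 0.61
```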
An important feature of neural networks is that they can learn. Simply put, this means that they can change their outputs when they are given new inputs. Backpropagation refers specifically to how the network is trained, i.e. how the network is told to learn. A backpropagation network learns by comparing the actual output of a unit to its desired output. The desired output is known as a 'teacher', and it can be the same as the input, as in the case of auto-associative/auto-encoder networks, or it can be completely different from the input. Either way, learning that requires a teacher is called supervised learning. The difference between the actual and desired outputs constitutes an error signal. This error signal is then fed back, or backpropagated, to the nodes in order to modify the weights in the neural network. Backpropagation first modifies the weights between the output layer and the hidden layer, and then modifies the weights between the hidden units and the input units. The weight changes act to decrease the discrepancy between the actual and desired output.
However, learning is typically incremental in these networks. This means that these networks require a series of presentations of the same input before the weight changes accumulate enough to produce the desired output. The weights are usually set to random values for the first learning trial, and after many trials they become better able to produce the desired output. The process of converging on an output is called settling. This kind of training is based on the error signal and the ''backpropagation learning algorithm'' / delta rule:
''Error signal at an output node'': ''e''i = (''t''i - ''o''i)''o''i(1 - ''o''i)
''Error signal at a hidden node'': ''e''i = ''o''i(1 - ''o''i)Σm ''w''mi''e''m
''Weight change'': Δ''w''ij = ''k'' ''e''i ''o''j

''e''i = error signal at node ''i''
''t''i = target output of node ''i''
Δ''w''ij = the change in the weight between node ''i'' and sending node ''j''
''w''mi = the weight between output node ''m'' and hidden node ''i''
''k'' = learning rate
''o''i = the actual output (activation) of node ''i''
''o''j = the activation of the sending node ''j''
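To make these formulas concrete, here is a minimal sketch of one full backpropagation update in Python with NumPy. The layer sizes, learning rate, and input/target patterns are illustrative assumptions, not values from the text:
```python
import numpy as np

def logistic(x):
    """Logistic response function."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Illustrative three-layer network: 3 input, 4 hidden, 2 output nodes.
W_ih = rng.uniform(-0.5, 0.5, (4, 3))   # input -> hidden weights
W_ho = rng.uniform(-0.5, 0.5, (2, 4))   # hidden -> output weights
k = 0.5                                 # learning rate

x = np.array([1.0, 0.0, 1.0])           # input pattern
t = np.array([1.0, 0.0])                # target ('teacher') output

# Forward pass: net input, then logistic output, at each layer.
h = logistic(W_ih @ x)                  # hidden activations
o = logistic(W_ho @ h)                  # actual outputs

# Error signal at the output nodes: e_i = (t_i - o_i) o_i (1 - o_i)
e_out = (t - o) * o * (1 - o)

# Error signal at the hidden nodes, backpropagated through W_ho:
# e_i = o_i (1 - o_i) * sum_m w_mi e_m
e_hid = h * (1 - h) * (W_ho.T @ e_out)

# Weight changes (delta rule): delta_w_ij = k * e_i * o_j,
# applied first to hidden->output, then to input->hidden weights.
W_ho += k * np.outer(e_out, h)
W_ih += k * np.outer(e_hid, x)
```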
The issue of catastrophic interference comes about when learning is sequential. Sequential training involves training the network on one input-output pattern until the error is reduced below a specific criterion, and then training the network on another set of input-output patterns. Specifically, a backpropagation network will forget information if it first learns input ''A'' and then learns input ''B''. Catastrophic interference is not seen when learning is concurrent or interleaved. Interleaved training means the network learns both input-output patterns at the same time, i.e. as ''AB''. Note that weights are only changed when the network is being trained, not when the network is being tested on its response. A minimal demonstration of this contrast is sketched below.
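The following sketch reuses the update step above to contrast sequential and interleaved training. The two single-pattern 'tasks' ''A'' and ''B'', the layer sizes, and the epoch counts are invented for illustration; in such a toy setup the error on ''A'' typically jumps after sequential training on ''B'', while interleaved training keeps both errors low:
```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(patterns, epochs, W_ih, W_ho, k=0.5):
    """Backpropagation training on a list of (input, target) pairs.

    Weights are updated in place, one pattern at a time.
    """
    for _ in range(epochs):
        for x, t in patterns:
            h = logistic(W_ih @ x)
            o = logistic(W_ho @ h)
            e_out = (t - o) * o * (1 - o)
            e_hid = h * (1 - h) * (W_ho.T @ e_out)
            W_ho += k * np.outer(e_out, h)
            W_ih += k * np.outer(e_hid, x)

def error(patterns, W_ih, W_ho):
    """Mean squared error on a pattern set; testing changes no weights."""
    return np.mean([(t - logistic(W_ho @ logistic(W_ih @ x))) ** 2
                    for x, t in patterns])

rng = np.random.default_rng(1)
A = [(np.array([1.0, 0.0, 1.0]), np.array([1.0, 0.0]))]  # 'task' A
B = [(np.array([0.0, 1.0, 1.0]), np.array([0.0, 1.0]))]  # 'task' B

# Sequential training: learn A to criterion, then train only on B.
W_ih, W_ho = rng.uniform(-0.5, 0.5, (4, 3)), rng.uniform(-0.5, 0.5, (2, 4))
train(A, 2000, W_ih, W_ho)
err_A_before = error(A, W_ih, W_ho)
train(B, 2000, W_ih, W_ho)
print(err_A_before, error(A, W_ih, W_ho))  # error on A typically jumps

# Interleaved training: A and B presented together; both stay learned.
W_ih, W_ho = rng.uniform(-0.5, 0.5, (4, 3)), rng.uniform(-0.5, 0.5, (2, 4))
train(A + B, 2000, W_ih, W_ho)
print(error(A, W_ih, W_ho), error(B, W_ih, W_ho))
```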
To summarize, backpropagation networks:
*Involve three-layer neural networks with input, hidden and output units
*Use a supervised learning system
*Compare the actual output to the target output
*Backwards propagate the ''error signal'' to update weights across the layers
*Learn incrementally through weight updates and eventually settle on the correct output
*Have an issue with sequential learning
